
Add a dedicated OpenAI-compatible LLM adapter#1895

Open
jimmyzhuu wants to merge 13 commits into Zipstack:main from jimmyzhuu:codex/openai-compatible-llm-adapter

Conversation

@jimmyzhuu

@jimmyzhuu jimmyzhuu commented Apr 4, 2026

Summary

This PR adds a dedicated OpenAI Compatible LLM adapter for OpenAI-style chat completion endpoints that are not the official OpenAI service.

The implementation is intentionally small in scope:

  • adds a new OpenAI Compatible LLM adapter backed by LiteLLM's custom_openai path
  • keeps the existing OpenAI adapter unchanged
  • adds adapter registration and JSON schema
  • adds focused tests for registration, model normalization, schema loading, and usage recording
  • documents the new adapter in the README

Why

Users may already have access to OpenAI-compatible endpoints behind a private gateway or third-party provider, but the current OpenAI adapter is specifically shaped around official OpenAI semantics.

Using a separate adapter keeps those semantics explicit and avoids broadening the meaning of the existing OpenAI adapter.

Refs #1894
Refs #856
Refs #1443

Scope

This PR is limited to:

  • LLM only
  • no embedding changes
  • no x2text / OCR changes
  • no changes to the existing OpenAI adapter behavior

Notes

LLM._record_usage now prefers provider-reported prompt_tokens when they are present in the usage payload.

If prompt_tokens are missing, _record_usage still falls back to LiteLLM token estimation. If that estimation raises, it now logs a warning and records 0 prompt tokens for usage audit instead of bubbling the exception up after a successful LLM call.

This behavior change is in the shared _record_usage path, so it applies to every SDK1 LLM adapter that uses it, not just custom_openai.

This keeps successful LLM calls from failing at the usage-audit / billing step while preserving the current pricing semantics in this PR.
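The fallback order described above can be sketched in plain Python. This is an illustration only: resolve_prompt_tokens and the injected estimate_tokens callable are hypothetical stand-ins for the SDK's _record_usage internals and litellm.token_counter.

```python
import logging

logger = logging.getLogger(__name__)


def resolve_prompt_tokens(usage, model, messages, estimate_tokens):
    """Prefer provider-reported prompt_tokens; otherwise estimate, and
    record 0 (with a warning) if the estimator itself raises."""
    usage_data = usage or {}
    prompt_tokens = usage_data.get("prompt_tokens")
    if prompt_tokens is None:
        try:
            # In the SDK this would be litellm.token_counter(model=..., messages=...)
            prompt_tokens = estimate_tokens(model=model, messages=messages)
        except Exception as e:
            # Degrade gracefully: a successful LLM call should not fail
            # at the usage-audit step just because estimation failed.
            prompt_tokens = 0
            logger.warning("Failed to estimate prompt tokens for %s: %s", model, e)
    return prompt_tokens
```
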

Validation

  • UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run pytest tests/test_openai_compatible_adapter.py
  • UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run ruff check src/unstract/sdk1/adapters/base1.py src/unstract/sdk1/adapters/llm1/__init__.py src/unstract/sdk1/adapters/llm1/openai_compatible.py src/unstract/sdk1/llm.py tests/test_openai_compatible_adapter.py

@CLAassistant

CLAassistant commented Apr 4, 2026

CLA assistant check
All committers have signed the CLA.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

This PR adds support for OpenAI-compatible LLM providers (e.g., via LiteLLM) by introducing a new adapter type, parameter validation with model normalization, improved usage token handling, and corresponding tests and configuration.

Changes

OpenAI-Compatible LLM Adapter Feature

  • Parameter definition — unstract/sdk1/src/unstract/sdk1/adapters/base1.py: OpenAICompatibleLLMParameters adds api_key and api_base fields, with validate() and validate_model() methods that prefix models with custom_openai/ and normalize empty API keys to None.
  • BaseAdapter enhancement — unstract/sdk1/src/unstract/sdk1/adapters/base1.py: BaseAdapter gains a SCHEMA_PATH class variable, and get_json_schema() now respects custom schema paths for adapters that override it.
  • Adapter implementation — unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py: OpenAICompatibleLLMAdapter subclasses the parameter class and BaseAdapter, exposing a fixed ID, metadata, the name "OpenAI Compatible", the provider "custom_openai", a description, and an icon through static methods.
  • Module registration & exports — unstract/sdk1/src/unstract/sdk1/adapters/llm1/__init__.py: OpenAICompatibleLLMAdapter is imported and added to __all__ for public exposure.
  • Usage recording — unstract/sdk1/src/unstract/sdk1/llm.py: _record_usage() now prefers provider-supplied prompt_tokens from usage_data; it falls back to token_counter() when they are absent and logs a warning (recording a zero token count) if token counting fails.
  • Configuration & schema — unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/openai_compatible.json: the JSON schema defines required fields (adapter_name, api_base) and optional fields (api_key, model, max_tokens, max_retries, timeout) with types and descriptions.
  • Documentation & tests — README.md, unstract/sdk1/tests/test_openai_compatible_adapter.py: the README now lists "OpenAI Compatible" as a supported provider; the test suite covers adapter registration, parameter normalization, schema structure, metadata, and usage-token-recording edge cases.
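The model normalization and API-key handling described above can be approximated in plain Python. This is a rough sketch, not the SDK's actual Pydantic class; the function names here are hypothetical.

```python
def validate_model(adapter_metadata: dict) -> str:
    """Prefix the configured model with 'custom_openai/' unless it
    already carries the prefix; reject empty model strings."""
    model = adapter_metadata.get("model") or ""
    if not model:
        raise ValueError("model must be a non-empty string")
    if not model.startswith("custom_openai/"):
        model = f"custom_openai/{model}"
    return model


def normalize_api_key(adapter_metadata: dict):
    """Treat an empty api_key as None so gateways without auth work."""
    return adapter_metadata.get("api_key") or None
```

The startswith guard makes the prefixing idempotent, so re-validating already-normalized metadata does not double-rewrite the model string.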

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Description check — ❓ Inconclusive: the description covers What, Why, and Scope but lacks several required template sections (How, breaking changes, database migrations, env config, relevant docs, related issues, dependency versions, and testing notes). Resolution: fill in the missing sections to give reviewers and maintainers complete context.

✅ Passed checks (3 passed)

  • Title check — ✅ Passed: the title clearly and specifically describes the main change (adding a dedicated OpenAI-compatible LLM adapter) and matches the changeset's primary objective.
  • Linked Issues check — ✅ Passed: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: skipped because no linked issues were found for this pull request.



@greptile-apps
Contributor

greptile-apps Bot commented Apr 4, 2026

Greptile Summary

This PR introduces a new OpenAICompatibleLLMAdapter backed by LiteLLM's custom_openai path for OpenAI-style endpoints that are not the official OpenAI service, alongside a shared change to LLM._record_usage that prefers provider-reported prompt_tokens and gracefully logs failures from the LiteLLM token-counter fallback instead of propagating exceptions.

  • New adapter: OpenAICompatibleLLMAdapter with Pydantic parameter class, JSON schema, icon, registration, and focused unit tests.
  • _record_usage behaviour change (shared path, affects all SDK1 adapters): uses reported prompt_tokens when present, falls back to litellm.token_counter() only when they are absent or zero, and logs a warning instead of raising on fallback failure.
  • BaseAdapter.get_json_schema now supports an explicit SCHEMA_PATH class variable so adapters can decouple their schema filename from their get_provider() return value.
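A minimal sketch of how such a SCHEMA_PATH override could behave. This is an assumption-laden illustration, not the real base1.py code; the actual BaseAdapter may resolve its schema location differently.

```python
import json
from pathlib import Path


class BaseAdapter:
    # Subclasses may point at an explicit schema file; when unset, the
    # filename is derived from the provider name, as sibling adapters do.
    SCHEMA_PATH = None

    @staticmethod
    def get_provider() -> str:
        raise NotImplementedError

    @classmethod
    def get_json_schema(cls) -> str:
        # Prefer an explicit SCHEMA_PATH, decoupling the schema filename
        # from whatever get_provider() returns (here, "custom_openai").
        path = cls.SCHEMA_PATH or Path("static") / f"{cls.get_provider()}.json"
        return Path(path).read_text(encoding="utf-8")
```
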

Confidence Score: 4/5

The new adapter is functionally sound, but two open concerns from prior review threads remain unresolved in the submitted code.

The adapter implementation, schema, and tests are clean. The _record_usage graceful-degradation change is well-tested and the fallback logic is correct. However, OpenAICompatibleLLMParameters.validate() still writes directly into the caller's dict before constructing the Pydantic model, and the model field is still absent from the JSON schema required array — both flagged in earlier review rounds. Until those two items are closed, the change carries a latent mutation side-effect on any call-site that re-reads the original metadata dict after validation.

The affected files are unstract/sdk1/src/unstract/sdk1/adapters/base1.py (the OpenAICompatibleLLMParameters.validate mutation) and unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/openai_compatible.json (model not in required).

Important Files Changed

  • unstract/sdk1/src/unstract/sdk1/adapters/base1.py — adds OpenAICompatibleLLMParameters; validate() still mutates the caller's dict, and model is absent from the JSON schema required array (both already flagged in open review threads).
  • unstract/sdk1/src/unstract/sdk1/llm.py — _record_usage now prefers reported prompt_tokens and degrades gracefully on token_counter failure; the warning message text matches the test assertion correctly.
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py — clean adapter implementation using the SCHEMA_PATH class variable, proper MRO (parameters first, then BaseAdapter), and distinct provider/icon values.
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/openai_compatible.json — schema is well-formed; the model field is still absent from the required array (flagged in a previous thread).
  • unstract/sdk1/tests/test_openai_compatible_adapter.py — tests cover registration, model normalisation, schema loading, and usage recording; the magic-stub and schema-assertion concerns from previous threads are now addressed.
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/__init__.py — registers OpenAICompatibleLLMAdapter correctly alongside the existing adapters.

Sequence Diagram

sequenceDiagram
    participant UI as UI / Caller
    participant LLM as LLM.__init__
    participant Adapter as OpenAICompatibleLLMAdapter
    participant Params as OpenAICompatibleLLMParameters
    participant LiteLLM as LiteLLM

    UI->>LLM: LLM(adapter_id, adapter_metadata)
    LLM->>Adapter: validate(adapter_metadata)
    Adapter->>Params: validate_model(adapter_metadata)
    Params-->>Adapter: "custom_openai/{model}"
    Adapter->>Params: "OpenAICompatibleLLMParameters(**metadata).model_dump()"
    Adapter-->>LLM: kwargs (model, api_base, api_key, ...)
    LLM-->>UI: LLM instance

    UI->>LLM: complete(prompt)
    LLM->>Adapter: "validate({**self.kwargs, **kwargs})"
    Adapter-->>LLM: completion_kwargs
    LLM->>LiteLLM: "litellm.completion(messages, **completion_kwargs)"
    LiteLLM-->>LLM: response (with usage)

    LLM->>LLM: _record_usage(model, messages, usage, complete)
    alt usage has prompt_tokens
        LLM->>LLM: use reported prompt_tokens directly
    else prompt_tokens absent or zero
        LLM->>LiteLLM: token_counter(model, messages)
        alt token_counter succeeds
            LiteLLM-->>LLM: estimated prompt_tokens
        else token_counter raises
            LLM->>LLM: log warning, record 0 prompt_tokens
        end
    end
    LLM->>LiteLLM: cost_per_token(model, prompt_tokens, completion_tokens)
    LiteLLM-->>LLM: (prompt_cost, compl_cost)
    LLM->>LLM: append to _pending_usage
    LLM-->>UI: response LLMResponseCompat

Reviews (8). Last reviewed commit: "Reject empty model string in OpenAICompa..."

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
unstract/sdk1/src/unstract/sdk1/llm.py (1)

542-557: Avoid unconditional token estimation when usage already includes prompt tokens.

This currently computes token_counter() even when provider usage already has prompt tokens, which can create repeated warnings/noise for unmapped models without improving recorded usage.

♻️ Proposed refinement
-        try:
-            prompt_tokens = token_counter(model=model, messages=messages)
-        except Exception as e:
-            prompt_tokens = 0
-            logger.warning(
-                "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens: %s",
-                model,
-                llm_api,
-                e,
-            )
         usage_data: Mapping[str, int] = usage or {}
+        prompt_tokens = usage_data.get("prompt_tokens")
+        if prompt_tokens is None:
+            try:
+                prompt_tokens = token_counter(model=model, messages=messages)
+            except Exception as e:
+                prompt_tokens = 0
+                logger.warning(
+                    "[sdk1][LLM][%s][%s] Failed to estimate prompt tokens: %s",
+                    model,
+                    llm_api,
+                    e,
+                )
         all_tokens = TokenCounterCompat(
-            prompt_tokens=usage_data.get("prompt_tokens", 0),
+            prompt_tokens=usage_data.get("prompt_tokens", prompt_tokens or 0),
             completion_tokens=usage_data.get("completion_tokens", 0),
             total_tokens=usage_data.get("total_tokens", 0),
         )
Actionable comment — unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json, around lines 15-20:

The schema for the api_key property currently only allows a string, which fails when runtime metadata contains null. Update the api_key entry to also permit null so stored configs with a null key pass validation and editing flows.
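The fix amounts to widening the schema's type to a ["string", "null"] union. A hand-rolled check (a hypothetical helper, not part of the SDK or any JSON Schema library) shows why a bare "string" type rejects a stored null:

```python
def matches_type(value, schema_type) -> bool:
    """Tiny subset of JSON Schema type checking: 'string', 'null',
    or a list of such type names (a union)."""
    types = schema_type if isinstance(schema_type, list) else [schema_type]
    checks = {"string": lambda v: isinstance(v, str), "null": lambda v: v is None}
    return any(checks[t](value) for t in types if t in checks)
```
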

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bf841637-54b7-4802-9156-7f56e899ca54

📥 Commits

Reviewing files that changed from the base of the PR and between 7983c98 and 5090773.

📒 Files selected for processing (7)
  • README.md
  • unstract/sdk1/src/unstract/sdk1/adapters/base1.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/__init__.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json
  • unstract/sdk1/src/unstract/sdk1/llm.py
  • unstract/sdk1/tests/test_openai_compatible_adapter.py

@jimmyzhuu
Author

Addressed the review follow-ups.

  • allowed api_key = null in the OpenAI-compatible schema
  • avoided redundant token_counter() calls when provider usage already includes prompt_tokens
  • tightened the tests around both cases
  • added ERNIE / Baidu Qianfan as schema examples while keeping the adapter generic

Validation re-run:

  • UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run pytest tests/test_openai_compatible_adapter.py
  • UV_SKIP_WHEEL_FILENAME_CHECK=1 uv run ruff check src/unstract/sdk1/llm.py tests/test_openai_compatible_adapter.py

@jimmyzhuu
Author

Gentle follow-up on this PR in case it slipped through the queue. When someone has bandwidth, I would really appreciate a review. Happy to make any follow-up changes quickly. Thanks!

@jaseemjaskp jaseemjaskp self-requested a review April 23, 2026 09:01
Contributor

@jaseemjaskp jaseemjaskp left a comment


PR Review Toolkit — consolidated findings

Automated review aggregating six specialist agents (code-reviewer, code-simplifier, silent-failure-hunter, type-design-analyzer, pr-test-analyzer, comment-analyzer). No blocking defects; the adapter follows existing sibling-adapter conventions and the scope is appropriately narrow.

High-signal items worth addressing before merge

  • llm.py:556 — redundant/confusing fallback: prompt_tokens is already resolved at line 543 with an explicit estimation branch, so the usage_data.get("prompt_tokens", prompt_tokens or 0) expression double-handles the default and, worse, silently coerces an explicit None from usage_data to 0 without logging.
  • llm.py:547 — broad except Exception combined with a warning that says "failed to estimate" but not "recording 0 tokens" means billing/audit can silently under-report. Narrow the exception and either use logger.exception or rewrite the message to name the consequence.
  • base1.py:232 / schema api_base — the Pydantic type is plain str; URL shape lives only in the JSON schema, so direct construction accepts garbage. Consider HttpUrl / a field_validator.
  • base1.py:239 — prefix logic is only invoked by the validate classmethod. AzureOpenAILLMParameters uses @model_validator(mode="before") which cannot be bypassed; mirroring that tightens the invariant.
  • custom_openai.json — api_base is listed as required yet ships with a placeholder default URL, which lets users save an unchanged form and hit 404s at request time instead of validation errors. Vendor-specific examples (ERNIE-4.0-8K (Baidu Qianfan), qianfan.baidubce.com) are prone to rot and should be generic.
  • test_openai_compatible_adapter.py — the "tolerates unmapped models" test only asserts push_usage_data.assert_called_once(); it does not verify prompt_tokens=0 was actually pushed, so a regression silently pushing None or crashing inside TokenCounterCompat would still pass.

Suggested follow-ups (non-blocking)

  • Add tests for: api_base missing (Pydantic ValidationError), usage=None, usage["prompt_tokens"]=None, and the success branch of token_counter.
  • Drop the unused @lru_cache on _load_llm_module and the dead _load_llm_class.
  • Replace the tautological get_description() / metadata description with user-facing copy that distinguishes this adapter from OpenAILLMAdapter.

Inline comments below flag each item at its exact line with a concrete fix suggestion.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
unstract/sdk1/tests/test_openai_compatible_adapter.py (3)

102-192: LGTM — good coverage of the three _record_usage branches.

Tests exercise: (a) provider-supplied prompt_tokens bypasses token_counter, (b) token_counter raising falls back to 0 with a warning, and (c) prompt_tokens=None triggers estimation. The use of __new__ to bypass __init__ and the targeted patch.object(llm_module, ...) are appropriate here. Nice to see the warning-message assertion on line 162 pinning the audit-visible text.

One small suggestion: assert mock_warning is called exactly once and with model/llm_api substituted into the format string, to catch regressions that accidentally change the log signature.


18-31: Minor: lru_cache around import_module is largely redundant.

sys.modules already caches modules after first import, so the lru_cache(maxsize=1) only saves the patch.dict context-manager overhead. Leaving it is harmless, but on second and subsequent calls the magic stub will not be re-installed (because the cached branch returns early), so any test that newly triggers import magic after the first call would see the real module. Not a problem today, but worth documenting with a short comment to prevent surprise.


34-35: Dead helper: _load_llm_class is never called.

Every test uses _load_llm_module().LLM directly (e.g., lines 104, 135, 167). Consider removing _load_llm_class or using it in place of the inline llm_module.LLM lookups for consistency.

unstract/sdk1/src/unstract/sdk1/adapters/base1.py (1)

234-242: Minor: validate() mutates the caller's dict.

Lines 236 and 241 write back into adapter_metadata (same pattern as OpenAILLMParameters, but unlike VertexAILLMParameters which copies first via {**adapter_metadata}). Given LLM.complete() calls self.adapter.validate({**self.kwargs, **kwargs}) (a fresh dict), there's no current bug — but if a future caller passes a long-lived dict, the api_key would be mutated in place and the model prefix would double-rewrite on a second call (the startswith("custom_openai/") guard in validate_model mitigates the latter).

Optional: copy first for defensive hygiene, matching the VertexAILLMParameters.validate() style.
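The copy-first pattern the comment suggests can be sketched as a standalone function. This is a hypothetical approximation of what OpenAICompatibleLLMParameters.validate would do; the real classmethod also constructs and dumps the Pydantic model.

```python
def validate(adapter_metadata: dict) -> dict:
    """Copy-first validation: normalize on a shallow copy so the
    caller's dict is never mutated (VertexAILLMParameters style)."""
    metadata = {**adapter_metadata}  # shallow copy before any writes
    model = metadata.get("model") or ""
    if model and not model.startswith("custom_openai/"):
        metadata["model"] = f"custom_openai/{model}"
    metadata["api_key"] = metadata.get("api_key") or None
    return metadata
```

With this shape, even a caller that passes a long-lived dict and re-reads it after validation sees its original values intact.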

unstract/sdk1/src/unstract/sdk1/llm.py (1)

557-557: Minor: prompt_tokens or 0 also zeroes out legitimate 0 from provider.

If a provider ever reports usage.prompt_tokens == 0 (unusual, but possible for zero-content requests or certain gateways), the truthiness check collapses it the same as None. Given the prompt_tokens is None branch already assigns an int (or 0 on exception), this or 0 is only needed to satisfy the type checker. A more precise form:

Proposed tweak
-    all_tokens = TokenCounterCompat(
-        prompt_tokens=prompt_tokens or 0,
+    all_tokens = TokenCounterCompat(
+        prompt_tokens=prompt_tokens if prompt_tokens is not None else 0,
         completion_tokens=usage_data.get("completion_tokens", 0),
         total_tokens=usage_data.get("total_tokens", 0),
     )

Low-impact since 0 vs None ends up the same in the audit row, but semantically cleaner.
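The truthiness pitfall is easy to demonstrate in isolation (pick_prompt_tokens is a hypothetical helper, not SDK code):

```python
def pick_prompt_tokens(reported, fallback, *, explicit: bool):
    """Choose provider-reported tokens over a fallback value."""
    if explicit:
        # Only None falls through; a legitimate reported 0 is preserved.
        return reported if reported is not None else fallback
    # Truthiness check: a reported 0 is collapsed into the fallback.
    return reported or fallback
```
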

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@unstract/sdk1/src/unstract/sdk1/llm.py` at line 557, Replace the truthiness
fallback that zeroes out legitimate zero values: instead of using
"prompt_tokens=prompt_tokens or 0" keep the explicit None-check so only None
becomes 0 (e.g., use a conditional expression that assigns prompt_tokens if
prompt_tokens is not None else 0). Locate the occurrence of the
"prompt_tokens=prompt_tokens or 0" assignment and change it to an explicit
None-check for the variable prompt_tokens so a reported 0 remains 0.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Around line 543-560: The current catch in the prompt token estimation around
token_counter (used when building TokenCounterCompat) silently sets
prompt_tokens=0; update this to (1) narrow the except to only expected errors
from the estimator (e.g., KeyError/ValueError and litellm-specific exceptions
raised by token_counter) so unexpected errors still propagate, and (2) add a
sentinel field to the usage payload (e.g., prompt_tokens_source or
estimation_failed) before calling Audit().push_usage_data to mark that prompt
tokens were estimated/failed, and/or increment an ops metric/counter when the
fallback path occurs; reference the token_counter call, TokenCounterCompat
construction, Audit().push_usage_data, and the existing logger to emit a clear
warning and metric.

---

Nitpick comments:
In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py`:
- Around line 234-242: The validate() method currently mutates the incoming
adapter_metadata dict (it writes to adapter_metadata["model"] and
adapter_metadata["api_key"]); to avoid in-place side effects make a shallow copy
first (e.g., metadata = {**adapter_metadata}) and perform all modifications
against that copy before passing it to
OpenAICompatibleLLMParameters.validate_model and constructing
OpenAICompatibleLLMParameters(**metadata).model_dump(); keep references to the
same symbols (validate, OpenAICompatibleLLMParameters.validate_model,
OpenAICompatibleLLMParameters, adapter_metadata) so the change is local and
preserves existing behavior while preventing caller dict mutation.

In `@unstract/sdk1/src/unstract/sdk1/llm.py`:
- Line 557: Replace the truthiness fallback that zeroes out legitimate zero
values: instead of using "prompt_tokens=prompt_tokens or 0" keep the explicit
None-check so only None becomes 0 (e.g., use a conditional expression that
assigns prompt_tokens if prompt_tokens is not None else 0). Locate the
occurrence of the "prompt_tokens=prompt_tokens or 0" assignment and change it to
an explicit None-check for the variable prompt_tokens so a reported 0 remains 0.

In `@unstract/sdk1/tests/test_openai_compatible_adapter.py`:
- Around line 102-192: Update the
test_record_usage_tolerates_unmapped_models_without_prompt_tokens test to assert
the warning logger was called exactly once and that the warning message includes
the model ("custom_openai/gateway-model") and llm_api ("complete") values;
locate the test function and the mock_warning (patched via
patch.object(llm_module.logger, "warning")) and after calling llm._record_usage
add assertions that mock_warning.assert_called_once() and that the call_args
contains both the model and llm_api strings in the formatted warning message to
catch signature regressions.
- Around line 18-31: The `@lru_cache` on _load_llm_module() prevents the
patch.dict stub for "magic" from being re-applied on subsequent calls, which can
lead to surprising behavior if tests later import the real magic module; either
remove the `@lru_cache` decorator or (preferred) keep it but add a brief comment
inside _load_llm_module explaining that sys.modules already caches imports and
that the cached result means the "magic" stub will not be re-installed on later
calls so tests should call this once or manage stubbing themselves — reference
the _load_llm_module function and the patch.dict usage when adding the comment.
- Around line 34-35: The helper function _load_llm_class is unused (dead code);
either remove _load_llm_class entirely or replace direct usages of
_load_llm_module().LLM in tests (e.g., the inline lookups at places that call
llm_module.LLM) with calls to _load_llm_class() for consistency. Locate the
definition of _load_llm_class and the test files referencing
_load_llm_module().LLM and either delete the unused _load_llm_class function or
update those tests to call _load_llm_class() instead, ensuring imports and type
annotations still match.
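The caching interaction flagged above can be demonstrated in isolation; `json` stands in for the real SDK module, and the call counter exists only to make the single execution visible:

```python
import importlib
import sys
from functools import lru_cache
from unittest.mock import MagicMock, patch

calls = []

@lru_cache(maxsize=1)
def _load_llm_module():
    # sys.modules caches imports, and lru_cache caches this result, so the
    # "magic" stub below is installed only on the first call; later calls
    # return the cached module without re-entering the patch context.
    calls.append(1)
    with patch.dict(sys.modules, {"magic": MagicMock()}):
        return importlib.import_module("json")  # placeholder for the SDK module

first = _load_llm_module()
second = _load_llm_module()
assert first is second and len(calls) == 1  # function body ran exactly once
```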

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bb92694d-b745-40e9-8bd9-0bc1fa3628b1

📥 Commits

Reviewing files that changed from the base of the PR and between d3e1cad and f6a2a7d.

⛔ Files ignored due to path filters (1)
  • frontend/public/icons/adapter-icons/OpenAICompatible.png is excluded by !**/*.png
📒 Files selected for processing (5)
  • unstract/sdk1/src/unstract/sdk1/adapters/base1.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json
  • unstract/sdk1/src/unstract/sdk1/llm.py
  • unstract/sdk1/tests/test_openai_compatible_adapter.py
✅ Files skipped from review due to trivial changes (2)
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
  • unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json

Comment thread unstract/sdk1/src/unstract/sdk1/llm.py Outdated
Comment thread unstract/sdk1/src/unstract/sdk1/adapters/base1.py
@jimmyzhuu jimmyzhuu closed this May 6, 2026
@athul-rs
Contributor

athul-rs commented May 6, 2026

Hi @jimmyzhuu - sorry we let this sit so long without a review, that's on us. The change looks reasonable in scope and the validation looks solid. If you're still interested, we'd be happy to have you reopen it and we'll get a maintainer on it this week. Either way, thanks for the careful work and the patience.
cc: @hari-kuriakose @ritwik-g @jaseemjaskp

@jimmyzhuu jimmyzhuu reopened this May 6, 2026
@jimmyzhuu
Author

Absolutely — I’d be happy to work on this. Thanks for the suggestion!

@athul-rs athul-rs requested review from athul-rs and pk-zipstack May 6, 2026 08:34
Contributor

@chandrasekharan-zipstack chandrasekharan-zipstack left a comment


LGTM for the most part, @pk-zipstack please help take a look as well

Comment thread unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json Outdated
Contributor

@pk-zipstack pk-zipstack left a comment


LGTM

Comment thread unstract/sdk1/src/unstract/sdk1/adapters/llm1/static/custom_openai.json Outdated
Contributor

@jaseemjaskp jaseemjaskp left a comment


LGTM. After running the PR review toolkit (code-reviewer, comment-analyzer, pr-test-analyzer, silent-failure-hunter, type-design-analyzer, code-simplifier), all clearly important findings (correctness, billing/audit data integrity, broken contracts) were already raised by prior reviewers (greptile, coderabbitai, jaseemjaskp) and addressed by the author in the latest commits. Remaining items from agent passes are minor or NIT-level (e.g. tightening api_base/model pydantic validators, schema title/filename naming, end-to-end LLM(init) coverage), so I'm not posting them as inline comments. Resolved my three previously-posted threads that the current code addresses (DESCRIPTION constant, dedicated icon, blank api_key coercion).

@athul-rs
Contributor

athul-rs commented May 6, 2026

@jimmyzhuu The one thing I'd note is that the exception-swallowing path now applies to every adapter, not just custom_openai. I think that's the right tradeoff (a successful LLM call shouldn't fail at the billing step), but flagging it for the merge commit.
Could you call out the _record_usage change explicitly in the PR description?
cc: @hari-kuriakose @pk-zipstack
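The behavior being flagged can be sketched as a fallback chain (function and parameter names are illustrative, not the SDK's actual signatures):

```python
import logging

logger = logging.getLogger(__name__)

def resolve_prompt_tokens(usage: dict, estimate) -> int:
    """Prefer provider-reported prompt_tokens, then estimation, then 0."""
    reported = usage.get("prompt_tokens")
    if reported is not None:
        return reported
    try:
        return estimate()  # e.g. a litellm.token_counter(...) call
    except Exception:
        # A successful LLM call should not fail at the usage-audit step.
        logger.warning("Token estimation failed; recording 0 prompt tokens")
        return 0

assert resolve_prompt_tokens({"prompt_tokens": 11}, lambda: 99) == 11
assert resolve_prompt_tokens({}, lambda: 42) == 42   # estimation fallback
assert resolve_prompt_tokens({}, lambda: 1 / 0) == 0  # swallowed and logged
```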

@jimmyzhuu
Author

@jimmyzhuu The one thing I'd note is that the exception-swallowing path now applies to every adapter, not just custom_openai. I think that's the right tradeoff (a successful LLM call shouldn't fail at the billing step), but flagging it for the merge commit. Could you call out the _record_usage change explicitly in the PR description? cc: @hari-kuriakose @pk-zipstack

@athul-rs Updated the PR description.

Resolve conflict in unstract/sdk1/src/unstract/sdk1/llm.py by taking
main's version of _record_usage. This PR's prompt_tokens fallback is
subsumed by main's existing litellm.token_counter() fallback, and the
TokenCounterCompat / Audit().push_usage_data() shape this PR retained
is dead post-Lookups-V2 refactor (usage now flows through
self._pending_usage).

Update the three _record_usage tests in
test_openai_compatible_adapter.py to assert on the _pending_usage
carrier rather than the removed Audit pattern. Also fix a latent
sys.modules rollback in _load_llm_module that broke litellm attribute
resolution after the patch.dict context exited.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Comment thread unstract/sdk1/src/unstract/sdk1/adapters/base1.py
validate_model previously produced "custom_openai/" for an empty model,
surfacing as a confusing LiteLLM error at call time. Match the existing
GeminiLLMParameters.validate_model pattern: strip whitespace, raise
ValueError on empty input.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
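The pattern described in this commit can be sketched as a plain validator function (the real code follows the shape of the SDK's `GeminiLLMParameters.validate_model` pydantic validator):

```python
def validate_model(value: str) -> str:
    # Strip whitespace and reject empty input so an empty model fails fast
    # with a clear ValueError instead of a confusing LiteLLM error at call time.
    value = value.strip()
    if not value:
        raise ValueError("model must be a non-empty string")
    return value

assert validate_model("  gateway-model ") == "gateway-model"
try:
    validate_model("   ")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for an empty model")
```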
@sonarqubecloud

class BaseAdapter(ABC):
"""Adapter base class for compatibility with all SDK v1 providers."""

SCHEMA_PATH: ClassVar[str | None] = None
Contributor

@jimmyzhuu @pk-zipstack maybe a silly question. Why did we have to refactor SCHEMA_PATH into a class variable? I was just wondering whether using it like this could have other side effects, since we use BaseAdapter as the base class for other adapter classes.

I mean, wouldn't renaming openai_compatible.json to custom_openai.json have avoided these changes altogether?

@jimmyzhuu If the above works I would suggest doing that and reverting the introduced cls.SCHEMA_PATH logic. While there's nothing inherently wrong with what you did, we want to keep changes to the base class minimal, or avoid them entirely when possible.

Contributor

@ritwik-g's suggestion is correct and is the simpler path.

Looking at the code, the default get_json_schema() in BaseAdapter constructs the schema path as:

f"{os.path.dirname(__file__)}/{cls.get_adapter_type().name.lower()}1/static/{cls.get_provider()}.json"

Since OpenAICompatibleLLMAdapter.get_provider() returns "custom_openai", the default path resolution would look for llm1/static/custom_openai.json. The schema file is named openai_compatible.json instead, which is the only reason SCHEMA_PATH was added to BaseAdapter.

Every other adapter in the static directory follows the convention of naming the JSON file after get_provider(): anthropic.json, bedrock.json, vertex_ai.json, ollama.json, etc. openai_compatible.json is the only outlier.

The fix is straightforward:

  1. Rename openai_compatible.jsoncustom_openai.json
  2. Remove SCHEMA_PATH from BaseAdapter
  3. Remove the SCHEMA_PATH override in OpenAICompatibleLLMAdapter

This keeps the base class untouched and aligns with the naming convention already established by all other adapters.
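The convention-based resolution can be sketched as follows (a standalone stand-in for the path construction inside `BaseAdapter.get_json_schema()`):

```python
def default_schema_path(base_dir: str, adapter_type: str, provider: str) -> str:
    # Mirrors the f-string quoted above: <type>1/static/<provider>.json
    return f"{base_dir}/{adapter_type.lower()}1/static/{provider}.json"

# With provider "custom_openai", the default resolution already points at
# custom_openai.json, which is why renaming the schema file removes the
# need for a SCHEMA_PATH override.
assert (default_schema_path("/sdk/adapters", "LLM", "custom_openai")
        == "/sdk/adapters/llm1/static/custom_openai.json")
```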

Contributor

@coderabbitai can you raise this fix?

Contributor

🧩 Analysis chain

@ritwik-g Sure! I'll plan these three changes now.

  • Rename openai_compatible.jsoncustom_openai.json in the static directory
  • Remove SCHEMA_PATH: ClassVar[str | None] = None from BaseAdapter
  • Remove the SCHEMA_PATH override and its get_json_schema usage from OpenAICompatibleLLMAdapter

🏁 Script executed:

#!/bin/bash
# Find the schema JSON file
fd "openai_compatible.json" --type f

# Find OpenAICompatibleLLMAdapter implementation
fd "openai_compatible.py" --type f

# Check for SCHEMA_PATH usage in openai_compatible.py
rg -n "SCHEMA_PATH" --type py

Length of output: 3009


🏁 Script executed:

#!/bin/bash
# Find the schema JSON file location
fd "openai_compatible.json" --type f

# Find OpenAICompatibleLLMAdapter implementation
fd "openai_compatible.py" --type f

# Check for SCHEMA_PATH usage across the codebase
rg -n "SCHEMA_PATH" --type py -A2 -B2

Length of output: 10596


@ritwik-g Here's the full plan for the fix. Four files need to change:

  • Rename openai_compatible.jsoncustom_openai.json so get_json_schema() auto-resolves via get_provider() = "custom_openai"
  • Remove SCHEMA_PATH class variable and the if schema_path is None branch from BaseAdapter in base1.py; also drop the now-unused ClassVar import
  • Remove the SCHEMA_PATH override (and unused os/ClassVar imports) from OpenAICompatibleLLMAdapter
  • Update the test that directly accesses SCHEMA_PATH and checks the old filename

🧬 Code Graph Analysis Results
  • File: unstract/sdk1/src/unstract/sdk1/constants.py
    Lines: 4-14

    class Common:
        METADATA = "metadata"
        MODULE = "module"
        ADAPTER = "adapter"
        SRC_FOLDER = "src"
        ADAPTER_METADATA = "adapter_metadata"
        ICON = "icon"
        ADAPTER_ID = "adapter_id"
        ADAPTER_TYPE = "adapter_type"
        ADAPTER_NAME = "_adapter_name"
        DEFAULT_ERR_MESSAGE = "Something went wrong"
    
  • File: unstract/sdk1/src/unstract/sdk1/constants.py
    Lines: 17-23

    class AdapterTypes(Enum):
        UNKNOWN = "UNKNOWN"
        LLM = "LLM"
        EMBEDDING = "EMBEDDING"
        VECTOR_DB = "VECTOR_DB"
        OCR = "OCR"
        X2TEXT = "X2TEXT"
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/vectordb/register.py
    Lines: 19-30

    def register_adapters(adapters: dict[str, Any]) -> None:
            current_directory = os.path.dirname(os.path.abspath(__file__))
            package = "unstract.sdk1.adapters.vectordb"
    
            for adapter in os.listdir(current_directory):
                adapter_path = os.path.join(current_directory, adapter, Common.SRC_FOLDER)
                # Check if the item is a directory and not a
                # special directory like __pycache__
                if os.path.isdir(adapter_path) and not adapter.startswith("__"):
                    VectorDBRegistry._build_adapter_list(adapter, package, adapters)
            if len(adapters) == 0:
                logger.warning("No vectorDB adapter found.")
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/ocr/register.py
    Lines: 19-30

    def register_adapters(adapters: dict[str, Any]) -> None:
            current_directory = os.path.dirname(os.path.abspath(__file__))
            package = "unstract.sdk1.adapters.ocr"
    
            for adapter in os.listdir(current_directory):
            adapter_path = os.path.join(current_directory, adapter, Common.SRC_FOLDER)
                # Check if the item is a directory and not a
                # special directory like __pycache__
                if os.path.isdir(adapter_path) and not adapter.startswith("__"):
                    OCRRegistry._build_adapter_list(adapter, package, adapters)
            if len(adapters) == 0:
                logger.warning("No ocr adapter found.")
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/x2text/register.py
    Lines: 19-30

    def register_adapters(adapters: dict[str, Any]) -> None:
            current_directory = os.path.dirname(os.path.abspath(__file__))
            package = "unstract.sdk1.adapters.x2text"
    
            for adapter in os.listdir(current_directory):
                adapter_path = os.path.join(current_directory, adapter, Common.SRC_FOLDER)
                # Check if the item is a directory and not a
                # special directory like __pycache__
                if os.path.isdir(adapter_path) and not adapter.startswith("__"):
                    X2TextRegistry._build_adapter_list(adapter, package, adapters)
            if len(adapters) == 0:
                logger.warning("No X2Text adapter found.")
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/embedding1/gemini.py
    Lines: 9-40

    def get_id() -> str:
            return "gemini|5c2a36b8-0b8e-4f26-82c0-9f3b564cb066"
    
    def get_name() -> str:
            return "Gemini"
    
    def get_description() -> str:
            return "Gemini embedding adapter"
    
    def get_provider() -> str:
            return "gemini"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/Gemini.png"
    
    def get_adapter_type() -> AdapterTypes:
            return AdapterTypes.EMBEDDING
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/llm1/openai_compatible.py
    Lines: 20-51

    def get_id() -> str:
            return "openaicompatible|b6d10f33-2c41-49fc-a8c2-58d2b247fc09"
    
    def get_name() -> str:
            return "OpenAI Compatible"
    
    def get_description() -> str:
            return DESCRIPTION
    
    def get_provider() -> str:
            return "custom_openai"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/OpenAICompatible.png"
    
    def get_adapter_type() -> AdapterTypes:
            return AdapterTypes.LLM
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/ocr/google_document_ai/src/google_document_ai.py
    Lines: 51-64

    def get_id() -> str:
            return "googledocumentai|1013f64b-ecc9-4e35-b986-aebd60fb55d7"
    
    def get_name() -> str:
            return "GoogleDocumentAI"
    
    def get_description() -> str:
            return "Google Document AI OCR"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/GoogleDocumentAI.png"
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/x2text/llama_parse/src/llama_parse.py
    Lines: 31-40

    def get_id() -> str:
            return "llamaparse|78860239-b3cc-4cc5-b3de-f84315f75d14"
    
    def get_name() -> str:
            return "LlamaParse"
    
    def get_description() -> str:
            return "LlamaParse X2Text"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/llama-parse.png"
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/x2text/llm_whisperer/src/llm_whisperer.py
    Lines: 49-62

    def get_id() -> str:
            return "llmwhisperer|0a1647f0-f65f-410d-843b-3d979c78350e"
    
    def get_name() -> str:
            return "LLMWhisperer"
    
    def get_description() -> str:
            return "LLMWhisperer X2Text"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/LLMWhisperer.png"
    
  • File: unstract/sdk1/src/unstract/sdk1/adapters/x2text/llm_whisperer_v2/src/llm_whisperer_v2.py
    Lines: 42-55

    def get_id() -> str:
            return "llmwhisperer|a5e6b8af-3e1f-4a80-b006-d017e8e67f93"
    
    def get_name() -> str:
            return "LLMWhisperer V2"
    
    def get_description() -> str:
            return "LLMWhisperer V2 X2Text"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/LLMWhispererV2.png"
    

  • File: unstract/sdk1/src/unstract/sdk1/adapters/vectordb/pinecone/src/pinecone.py
    Lines: 49-62

    def get_id() -> str:
            return "pinecone|83881133-485d-4ecc-b1f7-0009f96dc74a"
    
    def get_name() -> str:
            return "Pinecone"
    
    def get_description() -> str:
            return "Pinecone VectorDB"
    
    def get_icon() -> str:
            return "/icons/adapter-icons/pinecone.png"
    
  • File: unstract/sdk1/src/unstract/sdk1/vector_db.py
    Lines: 192-202

    def add(
            self,
            ref_doc_id: str,
            nodes: list[BaseNode],
        ) -> list[str]:
            if not self.vector_db_adapter_class:
                raise VectorDBError("Vector DB is not initialised properly")
            self.vector_db_adapter_class.add(
                ref_doc_id=ref_doc_id,
                nodes=nodes,
            )
    


@ritwik-g ritwik-g added enhancement New feature or request good first issue Good for newcomers labels May 11, 2026